Mappings between linguistic sound and motion
Author
Abstract
This paper provides an overview of the possible function of non-arbitrary mappings between linguistic form and meaning, and presents new empirical evidence showing that shared cross-modal associations may underlie motion sound symbolism in particular. In terms of function, several lines of empirical and theoretical evidence suggest that non-arbitrary form-meaning connections could have played a crucial role in lexical emergence during language evolution. Furthermore, the persistence of such non-arbitrariness in some areas of modern language may also be highly functional, as recent data have shown that non-arbitrary forms may help to bootstrap learning in children (Imai, Kita, Nagumo, and Okada, 2008) and adults (Nielsen and Rendall, 2012). Given the functional role of these non-arbitrary mappings between linguistic form and meaning, this paper describes new experimental data demonstrating shared mappings between nonsense words and visual motion using a direct matching task. Participants were given nonsense words that varied in voicing, reduplication, and vowel quality, and were asked to adjust the movement of a ball to match a given word. Results show that back vowels are mapped onto slower speeds, and consonant reduplication with vowel alternation is mapped onto faster speeds. These results demonstrate a shared cross-modal association between linguistic sound and motion, which is likely leveraged in the sound-symbolic systems found in natural language.
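As a concrete illustration of the direct matching paradigm described above, the sketch below shows one way such a trial could be run: a nonsense word is displayed while a ball moves across the screen, and the participant adjusts its speed until the motion matches the word, with the final speed recorded as the response. The word list, key bindings, speed range, and the pygame-based display are illustrative assumptions, not the authors' actual stimulus code.

```python
# Minimal sketch of a direct matching trial (illustrative only).
import pygame

# Hypothetical stimuli varying in voicing, reduplication, and vowel quality.
STIMULI = ["bobo", "popo", "bibi", "pipi", "bodo", "bidi"]

def run_trial(word, width=800, height=400):
    """Show `word` with a bouncing ball; return the speed the participant settles on."""
    pygame.init()
    screen = pygame.display.set_mode((width, height))
    font = pygame.font.SysFont(None, 48)
    clock = pygame.time.Clock()

    x_pos, direction = float(width // 2), 1
    speed = 3.0  # pixels per frame; adjusted by the participant

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.KEYDOWN:
                if event.key == pygame.K_UP:        # faster
                    speed = min(speed + 0.5, 30.0)
                elif event.key == pygame.K_DOWN:    # slower
                    speed = max(speed - 0.5, 0.5)
                elif event.key == pygame.K_RETURN:  # accept current speed
                    running = False

        # Bounce the ball between the screen edges at the current speed.
        x_pos += direction * speed
        if x_pos < 20 or x_pos > width - 20:
            direction *= -1
            x_pos = max(20.0, min(float(width - 20), x_pos))

        screen.fill((255, 255, 255))
        screen.blit(font.render(word, True, (0, 0, 0)), (20, 20))
        pygame.draw.circle(screen, (30, 30, 200), (int(x_pos), height // 2), 20)
        pygame.display.flip()
        clock.tick(60)

    pygame.quit()
    return speed  # the matched speed is the dependent measure

if __name__ == "__main__":
    responses = {word: run_trial(word) for word in STIMULI}
    print(responses)
```

Under this setup, the analysis would simply compare the matched speeds across stimulus classes (e.g., back vs. front vowels, reduplicated vs. non-reduplicated forms).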
Related articles
Experimental Investigation of Linguistic and Parametric Descriptions of Human Motion for Animation
Computers process and store human movement in a different manner from how humans perceive and observe human movement. We describe an investigation of the mapping between the linguistic descriptions people ascribe to animated motions and the parameters utilized to produce the animations. The mapping is validated by comparing both the linguistic and parametric descriptions to similarity measures ...
A knowledge-based, data-driven method for action-sound mapping
This paper presents a knowledge-based, data-driven method for using data describing action-sound couplings collected from a group of people to generate multiple complex mappings between the performance movements of a musician and sound synthesis. This is done by using a database of multimodal motion data collected from multiple subjects coupled with sound synthesis parameters. A series of sound...
Shaping and exploring interactive motion-sound mappings using online clustering techniques
Machine learning tools for designing motion-sound relationships often rely on a two-phase iterative process, where users must alternate between designing gestures and performing mappings. We present a first prototype of a user adaptable tool that aims at merging these design and performance steps into one fully interactive experience. It is based on an online learning implementation of a Gaussi...
Sound "Gesturefication", sound/gesture-mapping for the interaction with recorded sounds
With "Gesturefication" we designate the creation of a mapping between gestural control and sound synthesis parameters based on the recording and analysis of a sound and related gestures. These gestures may have actually produced the sound or simply describe the sound from a particular point of view such as instrument performance, music conducting or active and embodied music listening. Our work...
Motion Tracking: a Music and Dance Tool for People with Cerebral Palsy
We gave 32 people with cerebral palsy the opportunity to use a motion tracking system to convert their movements into musical sounds. Due to their physical limitations, people with cerebral palsy have traditionally had fewer opportunities to make music or dance. Motion tracking can palliate this inequality, since it permits any movement to be mapped to music and sound. We used two sensor system...